Add end-to-end test report generation (and minor refactoring) #194
Conversation
GH test summary can be seen here: For some reason GitHub renders the logs in

Edit: it seems whitespace handling is somewhat broken on the summary page, though the markdown gets rendered differently if I paste it into a Gist, for example. 🤦
Right. I had a go at TAP/bats for something else some time ago, and walked away from it. It always feels like it should be good, though. Maybe something hasn't clicked yet. Help me understand the implications for our codebase. If you had done it via bats/TAP, would we have used a library that would have generated/helped generate the right output? And then the tool would have transformed that into some summary? Or would we have still generated the right format of output ourselves, and only applied the tool to create the summary? Maybe the only extra tax is creating that markdown report manually? The output in your failed test looks like a good thing! On
The BATS approach was:
Issues with this:
3 + 4 means that if anything fails, there's no good way to see an overview in the console, because individual test outcomes are interspersed with the failing tests' logs. If it were only for the GH summary report, I would say that yeah, BATS + post-processing the XML is probably the better option. But now it just seems like a big indirection with questionable benefits.
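For context, a rough sketch of the TAP stream that bats emits (the test names here are invented); the post-processing step discussed above would turn output like this into a markdown summary:

```shell
#!/usr/bin/env bash
# Toy emitter of TAP (Test Anything Protocol) output, the format bats speaks.
# A reporting tool would consume a stream like this to build a summary.
tap_stream() {
  echo "1..2"                          # plan line: two tests expected
  echo "ok 1 - container starts"       # a passing test
  echo "not ok 2 - healthcheck passes" # a failing test
  echo "# diagnostic output for the failure goes in comment lines"
}
tap_stream
```

Failure diagnostics live in `#` comment lines, which is part of why interleaved logs make the console overview hard to scan.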
Log formatting is now properly preserved in the summary page:
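For reference, one common way to keep log whitespace intact on the Actions summary page is to wrap the logs in a fenced code block before appending them to the `$GITHUB_STEP_SUMMARY` file; a sketch (the log file name and contents are invented, not the repo's actual step):

```shell
#!/usr/bin/env bash
# Sketch: append logs to the GitHub Actions step summary inside a fenced
# code block so indentation survives markdown rendering.
# $GITHUB_STEP_SUMMARY is set by Actions; default it for a local demo.
GITHUB_STEP_SUMMARY="${GITHUB_STEP_SUMMARY:-./summary.md}"
log_file=./e2e.log                                  # hypothetical log file
printf 'step 1: ok\n    indented detail\n' > "$log_file"
{
  echo "## e2e test logs"
  echo '```'
  cat "$log_file"
  echo '```'
} >> "$GITHUB_STEP_SUMMARY"
```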
Force-pushed from 0973a62 to e28983f
Rebased with

Force-pushed from 99d930a to 2994c38
I think this is good to merge? I rebased and dropped the sample error commit.
`passthru.tests` (i.e. `nix-build -A test <..> default.nix`) never fails now; instead the exit status is stored in `$out/status` (see build).
This seems relatively unproblematic for our current needs. We do run them in CI. I guess it might surprise any consumers of the top-level package, but to our knowledge the only "external" consumer is also us.
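A minimal sketch of the "never-fail" pattern under discussion, assuming a bash builder phase (the stand-in test command is invented; this is not the repo's actual derivation):

```shell
#!/usr/bin/env bash
# Sketch: run the test suite, but record its exit status in $out/status
# instead of letting a failure fail the Nix build itself.
out="${out:-./result-sketch}"    # Nix sets $out; default for a local demo
mkdir -p "$out"
set +e
false                            # stand-in for the real test runner command
status=$?
set -e
echo "$status" > "$out/status"   # CI (or any consumer) inspects this later
echo "tests finished with status $status"
```

The surprise for consumers mentioned above is exactly this: `nix-build` succeeds even when the suite did not.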
Yes. I am just thinking that it would also be nice if the CLI summary report were reused for the integration tests as well. We have 9 of them now, and the current fail-on-first-error for-loop in
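A rough sketch of what replacing that fail-on-first-error loop could look like (test names and commands are illustrative, not the real integration tests): run every test, collect outcomes, and print one summary at the end:

```shell
#!/usr/bin/env bash
# Sketch: run all integration tests even when one fails, collecting
# outcomes so a single summary can be printed at the end.
overall=0
summary=""
run_test() {
  local name="$1"; shift
  if "$@" > /dev/null 2>&1; then
    summary+="ok   - $name"$'\n'
  else
    summary+="FAIL - $name"$'\n'
    overall=1
  fi
}
run_test "test that passes" true
run_test "test that fails"  false
printf '%s' "$summary"
echo "overall exit status would be: $overall"
```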
👍
👍
My attempt to produce a nice e2e test summary / report has turned into a ridiculous yak-shaving mission.
Some background:
So... I decided to roll it manually 🤷
I am not very pleased with the outcome, but it's still better than nothing. If you have any ideas on how to do this in a neater way I am all ears.
Some points to note:
- `passthru.tests` (i.e. `nix-build -A test <..> default.nix`) never fails now; instead the exit status is stored in `$out/status` (see build). I guess this could be avoided by using `--keep-failed` instead and getting the test run artifacts from the temporary build dir?
- `./result` with naming based on their source filepath, i.e.:

Example of a test summary printed to the console:
Checklist